In this paper, we propose a multi-modal multi-correlation learning framework targeting the task of audio-visual speech separation. Although previous efforts have extensively explored combining the audio and visual modalities, most of them only adopt a straightforward concatenation of audio and visual features. To exploit the truly useful information behind these two modalities, we define two key correlations, namely: (1) identity correlation (between timbre and facial attributes); (2) phonetic correlation (between phonemes and lip motion). Together, these two correlations comprise the complete information needed to separate the target speaker's voice, especially in hard cases such as same gender or similar content. For implementation, contrastive learning or adversarial training is adopted to maximize the two correlations. Both work well, while adversarial training shows its advantage by avoiding some limitations of contrastive learning. Compared with previous research, our solution demonstrates clear improvements on the experimental metrics without additional complexity. Further analysis reveals the effectiveness of the proposed architecture and its good potential for future extension.
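As a concrete illustration, the sketch below shows how one such cross-modal correlation could be maximized with an InfoNCE-style contrastive objective. The encoder outputs, temperature, and symmetric loss are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def correlation_infonce(audio_emb, visual_emb, temperature=0.07):
    """InfoNCE-style loss pulling paired audio/visual embeddings together.

    audio_emb, visual_emb: (batch, dim) tensors from the respective encoders.
    Paired rows (same speaker/segment) are positives; all other rows in the
    batch act as negatives. Illustrative only; the paper's exact objective
    may differ.
    """
    a = F.normalize(audio_emb, dim=-1)
    v = F.normalize(visual_emb, dim=-1)
    logits = a @ v.t() / temperature          # (batch, batch) similarity matrix
    targets = torch.arange(a.size(0), device=a.device)
    # Symmetric cross-entropy: each audio row should match its own visual row.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))
```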
Learning with noisy labels has attracted much research interest, since data annotation, especially for large-scale datasets, may inevitably be imperfect. Recent approaches recast the problem as semi-supervised learning by dividing the training samples into clean and noisy sets. This paradigm, however, is prone to significant degeneration under heavy label noise, as the number of clean samples is too small for conventional methods to behave well. In this paper, we introduce a novel framework, termed LC-Booster, to explicitly tackle learning under extreme noise. The core idea of LC-Booster is to incorporate label correction into sample selection, so that more purified samples, obtained through reliable label correction, can be utilized for training, thereby alleviating confirmation bias. Experiments show that LC-Booster advances state-of-the-art results on several noisy-label benchmarks, including CIFAR-10, CIFAR-100, Clothing1M, and WebVision. Remarkably, under the extreme 90% noise ratio, LC-Booster achieves 92.9% and 48.4% accuracy on CIFAR-10 and CIFAR-100, respectively, surpassing state-of-the-art methods by a large margin.
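To make the core idea concrete, here is a toy sketch of coupling sample selection with label correction. The small-loss selection rule and the fixed thresholds are stand-ins for the paper's actual procedure.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def select_and_correct(logits, noisy_labels, loss_thresh=0.5, conf_thresh=0.9):
    """Toy version of coupling sample selection with label correction.

    Samples with small loss are treated as clean; among the rest, those the
    model predicts with high confidence get their labels replaced by the
    prediction, enlarging the usable training set. Thresholds and the
    selection rule are illustrative, not the paper's exact method.
    """
    losses = F.cross_entropy(logits, noisy_labels, reduction="none")
    probs = logits.softmax(dim=-1)
    conf, pred = probs.max(dim=-1)

    clean_mask = losses < loss_thresh                    # keep as labeled set
    correct_mask = (~clean_mask) & (conf > conf_thresh)  # relabel confidently
    corrected = noisy_labels.clone()
    corrected[correct_mask] = pred[correct_mask]
    keep_mask = clean_mask | correct_mask                # purified training set
    return corrected, keep_mask
```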
We focus on the task of creating reinforcement learning agents that are inherently explainable: able to produce immediate local explanations by thinking out loud while performing a task, and to analyze entire trajectories post hoc to produce causal explanations. This Hierarchically Explainable Reinforcement Learning agent (HEX-RL) operates in interactive fiction, text-based game environments in which the agent perceives and acts upon the world using textual natural language. These games are usually structured as puzzles or quests with long-term dependencies in which an agent must complete a sequence of actions to succeed, providing an ideal environment for testing an agent's ability to explain its actions. Our agent is designed to treat explainability as a first-class citizen, using an extracted symbolic knowledge-graph-based state representation coupled with a hierarchical graph attention mechanism that points to the facts in its internal graph representation that most influenced action selection. Experiments show that this agent provides significantly improved explanations over strong baselines, as rated by human participants generally unfamiliar with the environment, while also matching state-of-the-art task performance.
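A minimal sketch of how graph-attention weights could be turned into a local explanation follows; the triple format and the verbalization template are hypothetical, standing in for HEX-RL's hierarchical graph attention.

```python
import torch

def explain_action(triples, attention_weights, k=3):
    """Rank knowledge-graph facts by how much attention they received.

    triples: list of (subject, relation, object) strings in the KG state.
    attention_weights: 1-D tensor with one weight per triple, taken from a
    graph attention layer. A toy stand-in for HEX-RL's hierarchical
    attention over its symbolic state.
    """
    topk = torch.topk(attention_weights, k=min(k, len(triples)))
    facts = [triples[i] for i in topk.indices.tolist()]
    return "I did this because " + "; ".join(
        f"{s} {r} {o}" for s, r, o in facts)
```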
Automated storytelling has long captured the attention of researchers owing to the ubiquity of narratives in everyday life. However, maintaining coherence and staying on topic toward a specific ending remain challenging when generating narratives with neural language models. In this paper, we introduce Story generation with Reader Models (StoRM), a framework in which a reader model is used to reason about how the story should progress. A reader model is what a human reader believes about the concepts, entities, and relations of the fictional story world. We show how an explicit reader model, represented as a knowledge graph, provides story coherence as well as controllability in the form of achieving a given story-world goal. Experiments show that our model produces significantly more coherent and on-topic stories, outperforming baselines on dimensions including plot plausibility and staying on topic. Our system also outperforms outline-guided story generation baselines in composing given concepts without ordering.
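The following toy function illustrates one way a knowledge-graph reader model could drive goal-directed control: score candidate continuations by how many goal facts the updated reader graph satisfies. All names are illustrative; the paper's inference procedure is more involved.

```python
def goal_progress(reader_graph, goal_triples):
    """Fraction of goal facts already present in the reader model.

    reader_graph: set of (subject, relation, object) tuples the reader is
    assumed to believe; goal_triples: facts the finished story must
    establish. A toy proxy for ranking continuations toward a story goal.
    """
    satisfied = reader_graph & set(goal_triples)
    return len(satisfied) / max(len(goal_triples), 1)

# Candidate continuations can then be compared by how much each one raises
# goal_progress(...) after updating the reader graph with its inferred facts.
```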
In this paper, we propose a robust 3D detector, named Cross Modal Transformer (CMT), for end-to-end 3D multi-modal detection. Without explicit view transformation, CMT takes the image and point cloud tokens as inputs and directly outputs accurate 3D bounding boxes. The spatial alignment of multi-modal tokens is performed implicitly, by encoding the 3D points into multi-modal features. The core design of CMT is quite simple while its performance is impressive. CMT obtains 73.0% NDS on the nuScenes benchmark. Moreover, CMT remains strongly robust even if the LiDAR input is missing. Code will be released at https://github.com/junjie18/CMT.
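For intuition, the sketch below shows the general pattern of fusing image and point-cloud tokens in a single transformer encoder; dimensions, layer counts, and the handling of 3D positional encoding are assumptions, not CMT's actual architecture.

```python
import torch
import torch.nn as nn

class ToyCrossModalEncoder(nn.Module):
    """Minimal sketch of jointly encoding image and point-cloud tokens.

    Both token streams are projected to a shared width, concatenated, and
    processed by one encoder; CMT additionally injects 3D positional
    information into both streams. All hyperparameters here are made up.
    """
    def __init__(self, img_dim=256, pts_dim=128, d_model=256, n_layers=2):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, d_model)
        self.pts_proj = nn.Linear(pts_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)

    def forward(self, img_tokens, pts_tokens):
        # img_tokens: (B, N_img, img_dim); pts_tokens: (B, N_pts, pts_dim)
        tokens = torch.cat([self.img_proj(img_tokens),
                            self.pts_proj(pts_tokens)], dim=1)
        return self.encoder(tokens)   # fused multi-modal features
```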
A recent study has shown a phenomenon called neural collapse, in which the within-class means of features and the classifier weight vectors converge to the vertices of a simplex equiangular tight frame at the terminal phase of training for classification. In this paper, we explore the corresponding structures of the last-layer feature centers and classifiers in semantic segmentation. Based on our empirical and theoretical analysis, we point out that semantic segmentation naturally brings contextual correlation and imbalanced distribution among classes, which breaks the equiangular and maximally separated structure of neural collapse for both feature centers and classifiers. However, such a symmetric structure is beneficial to discrimination for the minor classes. To preserve these advantages, we introduce a regularizer on feature centers to encourage the network to learn features closer to the appealing structure under imbalanced semantic segmentation. Experimental results show that our method brings significant improvements on both 2D and 3D semantic segmentation benchmarks. Moreover, our method ranks 1st and sets a new record (+6.8% mIoU) on the ScanNet200 test leaderboard. Code will be available at https://github.com/dvlab-research/Imbalanced-Learning.
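As a rough sketch of the idea, a regularizer can penalize deviation of the class feature centers from the simplex-ETF geometry, whose off-diagonal pairwise cosine similarity is exactly -1/(K-1) for K classes; the exact form of the paper's regularizer may differ.

```python
import torch
import torch.nn.functional as F

def etf_center_regularizer(centers):
    """Push class feature centers toward a simplex-ETF geometry.

    centers: (K, dim) tensor of per-class feature means. After centering and
    normalizing, a simplex equiangular tight frame has pairwise cosine
    similarity of -1/(K-1); this toy regularizer penalizes squared deviation
    from that target on the off-diagonal entries.
    """
    k = centers.size(0)
    c = F.normalize(centers - centers.mean(dim=0, keepdim=True), dim=-1)
    cos = c @ c.t()                                        # (K, K) similarities
    target = torch.full_like(cos, -1.0 / (k - 1))
    off_diag = ~torch.eye(k, dtype=torch.bool, device=centers.device)
    return ((cos - target)[off_diag] ** 2).mean()
```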
In recent years, arbitrary image style transfer has attracted more and more attention. Given a pair of content and style images, the goal is a stylized image that retains the content of the former while capturing the style patterns of the latter. However, it is difficult to maintain a good trade-off between content details and style features: stylizing the image with sufficient style patterns may damage the content details, sometimes to the point where objects in the image can no longer be clearly distinguished. For this reason, we present a new transformer-based method named STT for image style transfer, together with an edge loss that noticeably enhances content details and avoids the blurred results caused by excessive rendering of style features. Qualitative and quantitative experiments demonstrate that STT achieves performance comparable to state-of-the-art image style transfer methods while alleviating the content leak problem.
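A generic edge loss of the kind described might look like the following Sobel-based sketch; the gradient operator and the L1 comparison are assumptions rather than STT's exact definition.

```python
import torch
import torch.nn.functional as F

def edge_loss(stylized, content):
    """L1 distance between Sobel edge maps of stylized and content images.

    stylized, content: (B, 3, H, W) image batches. Encourages the stylized
    output to preserve the content's structural edges; a generic edge loss,
    not necessarily the operator used in STT.
    """
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
    ky = kx.t()
    weight = torch.stack([kx, ky]).unsqueeze(1).to(stylized.device)  # (2,1,3,3)

    def edges(img):
        gray = img.mean(dim=1, keepdim=True)      # (B, 1, H, W) luminance proxy
        g = F.conv2d(gray, weight, padding=1)     # (B, 2, H, W) gradients
        return g.pow(2).sum(dim=1).clamp(min=1e-8).sqrt()

    return F.l1_loss(edges(stylized), edges(content))
```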
In recent years, the Transformer architecture has shown its superiority in the video-based person re-identification task. Inspired by video representation learning, these methods mainly focus on designing modules to extract informative spatial and temporal features. However, they are still limited in extracting local attributes and global identity information, which are critical for the person re-identification task. In this paper, we propose a novel Multi-Stage Spatial-Temporal Aggregation Transformer (MSTAT) with two newly designed proxy embedding modules to address the above issue. Specifically, MSTAT consists of three stages to encode the attribute-associated, the identity-associated, and the attribute-identity-associated information from the video clips, respectively, achieving a holistic perception of the input person. We combine the outputs of all the stages for the final identification. In practice, to save computational cost, Spatial-Temporal Aggregation (STA) modules are first adopted in each stage to conduct the self-attention operations along the spatial and temporal dimensions separately. We further introduce the Attribute-Aware and Identity-Aware Proxy embedding modules (AAP and IAP) to extract informative and discriminative feature representations at different stages. All of them are realized by employing newly designed self-attention operations with specific meanings. Moreover, temporal patch shuffling is also introduced to further improve the robustness of the model. Extensive experimental results demonstrate the effectiveness of the proposed modules in extracting informative and discriminative information from the videos, and illustrate that MSTAT achieves state-of-the-art accuracies on various standard benchmarks.
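To illustrate the factorized attention pattern behind the STA modules, the toy layer below attends over patches within each frame and then over frames within each patch track; dimensions and the module layout are illustrative, not MSTAT's actual design.

```python
import torch
import torch.nn as nn

class FactorizedSTAttention(nn.Module):
    """Self-attention applied along space and time separately.

    Input: (batch, frames, patches, dim) video tokens. Attending within each
    frame and then along each patch track is far cheaper than full joint
    attention over all frames and patches, which is the stated motivation
    for separating the spatial and temporal dimensions.
    """
    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.spatial = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.temporal = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):
        b, t, p, d = x.shape
        s = x.reshape(b * t, p, d)                       # attend over patches
        s = self.spatial(s, s, s)[0].reshape(b, t, p, d)
        m = s.permute(0, 2, 1, 3).reshape(b * p, t, d)   # attend over frames
        m = self.temporal(m, m, m)[0].reshape(b, p, t, d)
        return m.permute(0, 2, 1, 3)                     # back to (B, T, P, D)
```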
Machine learning models are typically evaluated by computing similarity with reference annotations and trained by maximizing similarity with them. Especially in the bio-medical domain, annotations are subjective and suffer from low inter- and intra-rater reliability. Since annotations only reflect the annotating entity's interpretation of the real world, this can lead to sub-optimal predictions even though the model achieves high similarity scores. Here, the theoretical concept of Peak Ground Truth (PGT) is introduced. PGT marks the point beyond which an increase in similarity with the reference annotation stops translating into better Real World Model Performance (RWMP). Additionally, a quantitative technique to approximate PGT by computing inter- and intra-rater reliability is proposed. Finally, three categories of PGT-aware strategies to evaluate and improve model performance are reviewed.
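One simplified reading of the PGT approximation is to use mean pairwise inter-rater agreement as a ceiling on useful model-reference similarity, as in the sketch below; the choice of metric (Dice) and the aggregation are assumptions.

```python
import numpy as np

def dice(a, b, eps=1e-8):
    """Dice overlap between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return (2.0 * inter + eps) / (a.sum() + b.sum() + eps)

def approximate_pgt(annotations):
    """Mean pairwise Dice across raters annotating the same image.

    annotations: list of binary masks from different raters. The average
    inter-rater agreement serves as a rough ceiling: model-reference
    similarity above this value need not imply better real-world
    performance. A simplified stand-in for the paper's PGT approximation.
    """
    scores = [dice(annotations[i], annotations[j])
              for i in range(len(annotations))
              for j in range(i + 1, len(annotations))]
    return float(np.mean(scores))
```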
We propose a novel approach to self-supervised learning of point cloud representations by differentiable neural rendering. Motivated by the fact that informative point cloud features should be able to encode rich geometry and appearance cues and render realistic images, we train a point-cloud encoder within a devised point-based neural renderer by comparing the rendered images with real images on massive RGB-D data. The learned point-cloud encoder can be easily integrated into various downstream tasks, including not only high-level tasks like 3D detection and segmentation, but also low-level tasks like 3D reconstruction and image synthesis. Extensive experiments on various tasks demonstrate the superiority of our approach compared to existing pre-training methods.
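The pretraining loop reduces to encode, render, and compare, as in the hedged sketch below; `encoder` and `renderer` are placeholders for the paper's actual modules, and the photometric L1 loss stands in for whatever image losses are actually used.

```python
import torch
import torch.nn.functional as F

def pretrain_step(encoder, renderer, points, colors, camera, target_image):
    """One self-supervised step: encode, render, compare to the real photo.

    encoder maps a point cloud (with colors) to per-point features; renderer
    is a differentiable point-based renderer producing an image from points,
    features, and a camera pose. Both are hypothetical placeholders; the
    gradient of the photometric loss flows back into the encoder, which is
    the representation being pretrained.
    """
    feats = encoder(points, colors)                # per-point feature vectors
    rendered = renderer(points, feats, camera)     # (B, 3, H, W) rendered view
    loss = F.l1_loss(rendered, target_image)       # compare with real RGB
    loss.backward()                                # update encoder via optimizer
    return loss
```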